
    Z Unification Tools in Generic Formaliser

    We describe some prototype tools for performing unification (i.e. deriving the least common refinement) of simple Z specifications. The techniques used are those described in the work on viewpoint specification in Z at http://alethea.ukc.ac.uk/Dept/Computing/Research/NDS/consistency/cccfpsiZ.html; the tools have been implemented in Generic Formaliser (http://public.logica.com/formaliser), a product of Logica UK Limited. UKC Computing Laboratory technical report 10-97. The prototype tools themselves (in the form of Generic Formaliser grammars) will be made available later.
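
    As a rough intuition for "least common refinement": on a shared state, the unification of two consistent viewpoint constraints behaves like their conjunction, which then refines each viewpoint. A minimal brute-force sketch of that intuition in Python; the state space, predicates and names are invented for illustration and are not taken from the report:

        # Toy state space: a single variable x ranging over 0..9.
        STATES = range(10)

        # Two viewpoint specifications, each a predicate constraining the state.
        def view1(x): return x % 2 == 0    # viewpoint 1: x is even
        def view2(x): return x < 6         # viewpoint 2: x is small

        # Candidate unification: the conjunction of the viewpoint constraints.
        def unified(x): return view1(x) and view2(x)

        def models(pred):
            """The set of states satisfying a predicate."""
            return {x for x in STATES if pred(x)}

        # The unification refines each viewpoint: every state it admits
        # is admitted by both viewpoints ...
        assert models(unified) <= models(view1)
        assert models(unified) <= models(view2)
        # ... and the viewpoints are consistent: the unification is satisfiable.
        assert models(unified), "inconsistent viewpoints"
        print(sorted(models(unified)))     # [0, 2, 4]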

    Grey Box Data Refinement

    We introduce the concepts of grey box and display box data types. These make explicit the idea that state variables in abstract data types are not always hidden. Programming languages have visibility rules which make representations observable and modifiable. Specifications in model-based notations may have implicit assumptions about visible state components, or may be used in contexts where the representation does matter. Grey box data types are like the "standard" black box data types, except that they contain explicit subspaces of the state which are modifiable and observable. Display boxes indirectly observe the state by adding displays to a black box. Refinement rules for both these alternative data types are given, based on their interpretations as black boxes.
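
    The distinction can be pictured in ordinary program terms: a grey box exposes a designated part of its state, while the rest stays hidden as in a black box. A small Python sketch of that picture; the counter example and all names are invented here and are not the paper's definitions:

        from dataclasses import dataclass, field

        @dataclass
        class GreyBoxCounter:
            # Visible subspace of the state: clients may observe (and, in a
            # full grey box, modify) it directly.
            count: int = 0
            # Hidden representation, as in a standard black box.
            _history: list = field(default_factory=list)

            def inc(self) -> None:
                self._history.append(self.count)
                self.count += 1

        # A refinement of this data type must keep `count` observable with the
        # same values; only the hidden `_history` may be re-represented freely.
        c = GreyBoxCounter()
        c.inc(); c.inc()
        print(c.count)   # 2 -- the observable part of the state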

    A Generator for Turing Machine Simulating Programs - User's Manual

    By means of some sample dialogues, we show the use of a program that generates Berkeley Pascal programs from Turing machine descriptions, such that the generated Pascal programs simulate the behavior of the corresponding Turing machines.
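
    The generated programs are, in essence, table-driven interpreters of the machine's transition function. A compact Python sketch of what such a simulator does (the tool itself emits Berkeley Pascal; the unary-incrementer machine and all names here are invented for illustration):

        def run_tm(delta, tape, state="q0", accept="halt", blank="_"):
            """Run a Turing machine given as a transition table `delta`:
            (state, symbol) -> (next state, symbol to write, head move)."""
            cells = dict(enumerate(tape))
            pos = 0
            while state != accept:
                sym = cells.get(pos, blank)
                state, write, move = delta[(state, sym)]
                cells[pos] = write
                pos += 1 if move == "R" else -1
            lo, hi = min(cells), max(cells)
            return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

        # A unary incrementer: skip the existing 1s, append one more, halt.
        incrementer = {
            ("q0", "1"): ("q0", "1", "R"),
            ("q0", "_"): ("halt", "1", "R"),
        }
        print(run_tm(incrementer, "111"))   # prints 1111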

    Big Data Refinement

    "Big data" has become a major area of research and associated funding, as well as a focus of utopian thinking. In the still growing research community, one of the favourite optimistic analogies for data processing is that of the oil refinery, extracting the essence out of the raw data. Pessimists look for their imagery to the other end of the petrol cycle, and talk about the "data exhausts" of our society. Obviously, the refinement community knows how to do "refining". This paper explores the extent to which notions of refinement and data in the formal methods community relate to the core concepts in "big data". In particular, can the data refinement paradigm can be used to explain aspects of big data processing

    9 Squares: Framing Data Privacy Issues

    In order to frame discussions on data privacy in varied contexts, this paper introduces a categorisation of personal data along two dimensions. Each of the nine resulting categories offers a significantly different flavour of issues in data privacy. Some issues can also be perceived as a tension along a boundary between different categories. The first dimension is data ownership: who holds or publishes the data. The three possibilities are “me”, i.e. the data subject; “us”, where the data subject is part of a community; and “them”, where the data subject is indeed a subject only. The middle category contains social networks as the most interesting instance. The amount of control for the data subject moves from complete control in the “me” category to very little at all in the “them” square – but the other dimension also plays a role in that. The second dimension has three possibilities, too, focusing on the type of personal data recorded: “attributes” are what would traditionally be found in databases, and what one might think of first for “data protection”. The second type of data is “stories”, which is personal data (explicitly) produced by the data subjects, such as emails, pictures, and social network posts. The final type is “behaviours”, which is (implicitly) generated personal data, such as locations and browsing histories. The data subject has very little control over this data, even in the “us” category. This lack of control, which is closely related to the business models of the “us” category, is likely the major data privacy problem of our time.
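
    The two dimensions combine into the following grid; the dimensions and the example data types come from the abstract, while the pairing of examples to individual cells is an illustrative reading rather than the paper's own table:

        Ownership ->   "me"                  "us"                      "them"
        attributes     self-held records     community profiles        third-party databases
        stories        private notes         social network posts      others' accounts of the subject
        behaviours     self-tracking logs    platform activity data    tracked locations and browsing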

    Testing refinements of state-based formal specifications


    Strategies for Consistency Checking

    Viewpoint models of system development are becoming increasingly important. A major requirement for viewpoint modelling is to be able to check that the multiple viewpoint specifications are consistent with one another. The work presented in this report makes a contribution to this task. Our work is particularly influenced by the viewpoints model used in the ISO standardisation architecture for Open Distributed Processing. This report focuses on strategies for consistency checking. In particular, it considers how global consistency (between an arbitrary number of viewpoints) can be obtained from binary consistency (between two viewpoints). The report documents a number of different classes of consistency checking, from those that are very poorly behaved to those that are very well behaved. The report is intended as a companion to the work presented in [1] and should be read in association with that document. In particular, the body of this report is a single chapter which should be viewed as additional to the chapters included in [1]. This report contains complete proofs of all relevant results, even though some of the results are obvious and some of the proofs are trivial. A much compressed version of the report is being submitted for publication. Thus, the main value of this report is as a reference document for readers who require a complete presentation of the technical details. [1] E. Boiten, H. Bowman, J. Derrick and M. Steen, "Cross Viewpoint Consistency in Open Distributed Processing (Intra Language Consistency)", Technical Report 8-95, Computing Laboratory, University of Kent at Canterbury, 1995.
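
    The gap between binary and global consistency is easy to see on a toy example: three constraints can be pairwise satisfiable yet jointly unsatisfiable. A brute-force Python sketch; the viewpoints and domain are invented for illustration and are unrelated to the classes studied in the report:

        from itertools import combinations, product

        # Viewpoints as constraints over shared variables x, y, z.
        DOMAIN = range(3)
        viewpoints = {
            "vp1": lambda x, y, z: x < y,
            "vp2": lambda x, y, z: y < z,
            "vp3": lambda x, y, z: z < x,
        }

        def consistent(preds):
            """True iff some assignment satisfies all the given constraints."""
            return any(all(p(*v) for p in preds)
                       for v in product(DOMAIN, repeat=3))

        # Every pair of viewpoints is (binary) consistent ...
        for a, b in combinations(viewpoints, 2):
            assert consistent([viewpoints[a], viewpoints[b]]), (a, b)

        # ... yet all three together are globally inconsistent.
        assert not consistent(list(viewpoints.values()))
        print("pairwise consistent, globally inconsistent")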

    Unification and multiple views of data in Z

    This paper discusses the unification of Z specifications, in particular specifications that maintain different representations of what is intended to be the same datatype. Essentially, this amounts to integrating previously published techniques for combining multiple viewpoints and for combining multiple views. It is shown how the technique proposed in this paper indeed produces unifications, and that it generalises both previous techniques.
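
    The situation being unified can be pictured as two concrete views of one abstract datatype, each with its own retrieve function back to the common abstraction. A small Python sketch of that picture (the set-of-naturals datatype and all names are invented here for illustration):

        # View 1: a finite set of naturals kept as a duplicate-free list.
        def retrieve_list(xs):
            return frozenset(xs)

        # View 2: the same abstract set kept as a bit vector.
        def retrieve_bits(bits):
            return frozenset(i for i in range(bits.bit_length())
                             if bits >> i & 1)

        # The views agree when they retrieve to the same abstract value,
        # which is the consistency requirement a unification must respect.
        view_as_list = [1, 3, 4]
        view_as_bits = 0b11010            # bits 1, 3 and 4 set
        assert retrieve_list(view_as_list) == retrieve_bits(view_as_bits)
        print(sorted(retrieve_bits(view_as_bits)))   # [1, 3, 4]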

    Specifying and Refining Internal Operations in Z

    An important aspect in the specification of distributed systems is the role of the internal (or unobservable) operation. Such operations are not part of the interface to the environment (i.e. the user cannot invoke them); however, they are essential to our understanding and correct modelling of the system. In this paper we are interested in the use of the formal specification notation Z for the description of distributed systems. Various conventions have been employed to model internal operations when specifying such systems in Z. If internal operations are distinguished in the specification notation, then refinement needs to deal with internal operations in appropriate ways. Using an example of a telecommunications protocol, we show that standard Z refinement is inappropriate for refining a system when internal operations are specified explicitly. We present a generalization of Z refinement, called weak refinement, which treats internal operations differently from observable operations when refining a system. We discuss the role of internal operations in a Z specification, and in particular whether an equivalent specification not containing internal operations can be found. The nature of divergence through livelock is also discussed. Keywords: Z; Refinement; Distributed Systems; Internal Operations; Process Algebras; Concurrency.
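
    The key ingredient is the "weak" reading of an observable step: any number of internal steps, the observable action, then any more internal steps. A Python sketch of that reading on a toy labelled transition system (the system and names are invented; the paper's weak refinement conditions for Z are more involved):

        TAU = "tau"   # the internal, unobservable action

        # transitions: state -> list of (action, next state)
        trans = {
            "s0": [(TAU, "s1")],
            "s1": [("req", "s2")],
            "s2": [(TAU, "s0")],
        }

        def tau_closure(states):
            """All states reachable via internal steps only."""
            seen, todo = set(states), list(states)
            while todo:
                s = todo.pop()
                for act, t in trans.get(s, []):
                    if act == TAU and t not in seen:
                        seen.add(t)
                        todo.append(t)
            return seen

        def weak_step(states, action):
            """States reachable by tau*, then `action`, then tau*."""
            before = tau_closure(states)
            after = {t for s in before
                     for a, t in trans.get(s, []) if a == action}
            return tau_closure(after)

        # From s0 the system can weakly perform "req" even though the first
        # concrete step is internal.
        print(weak_step({"s0"}, "req"))   # {'s0', 's1', 's2'} (in some order)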

    Refinement for Probabilistic Systems with Nondeterminism

    Before we combine actions and probabilities, two very obvious questions should be asked. Firstly, what does "the probability of an action" mean? Secondly, how does probability interact with nondeterminism? Neither question has a single universally agreed upon answer, but by considering these questions at the outset we build a novel and hopefully intuitive probabilistic event-based formalism. In previous work we have characterised refinement via the notion of testing. Basically, if one system passes all the tests that another system passes (and maybe more), we say the first system is a refinement of the second. This is, in our view, an important way of characterising refinement, as it addresses the question "what sort of refinement should I be using?" We use testing in this paper as the basis for our refinement. We develop tests for probabilistic systems by analogy with the tests developed for non-probabilistic systems. We make sure that our probabilistic tests, when performed on non-probabilistic automata, give us refinement relations which agree with those for non-probabilistic automata. We formalise this property as a vertical refinement.
    Comment: In Proceedings Refine 2011, arXiv:1106.348
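
    Stripped of probabilities, the testing view of refinement is simple to state in code: model systems crudely as sets of traces and tests as predicates on traces. The vending-machine example and all names below are an illustrative sketch under that simplification, not the paper's model:

        spec_traces = {("coin", "tea"), ("coin", "coffee")}   # nondeterministic
        impl_traces = {("coin", "tea")}                       # one resolution

        tests = [
            lambda tr: tr[:1] == ("coin",),   # test: can accept a coin
            lambda tr: "tea" in tr,           # test: can serve tea
        ]

        def passes(traces, test):
            """A system passes a test if some trace of it satisfies the test."""
            return any(test(tr) for tr in traces)

        def refines(first, second):
            """`first` refines `second` if it passes every test `second` passes."""
            return all(passes(first, t) for t in tests if passes(second, t))

        print(refines(impl_traces, spec_traces))   # True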